Current Research in Neurobiology
Elsevier BV
Preprints posted in the last 30 days, ranked by how well they match Current Research in Neurobiology's content profile, based on 14 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit.
O'Connor, S. A.; Narain, P.; Mahajan, A.; Bancroft, G. L.; Haas, H. A.; Wallen-Friedman, E.; Vasisht, S.; Takano, H.; Kiffer, F. C.; Eisch, A. J.; Yun, S.
Environmental stressors rarely affect just one brain circuit. Most studies assess single cognitive endpoints, obscuring whether vulnerabilities are global or circuit-selective and how effects distribute across interconnected systems. To address this, we used galactic cosmic radiation (GCR), a Mars mission-relevant stressor that disrupts the hippocampal-nucleus accumbens-prefrontal circuit. C57BL/6J mice received 33-ion GCR simulation (33-GCR, 0.75 Gy) or sham radiation with the Nrf2-activating compound CDDO-EA or vehicle, followed by multi-domain behavioral testing in both sexes. Under very high memory load, male Veh/33-GCR mice showed enhanced pattern separation compared to Veh/Sham males, an effect normalized by CDDO-EA. Female mice showed no radiation-induced changes in pattern separation but weighed 9-18% more than Veh/Sham females and had reduced locomotor activity. Reward-based learning differed by sex: males showed no changes, while female Veh/33-GCR mice displayed enhanced reward anticipation that was further increased by CDDO-EA alone, with both treatments contributing to elevated goal-tracking. For behavioral flexibility, CDDO-EA impaired reversal learning in males regardless of radiation, while 33-GCR impaired reversal learning in females regardless of CDDO-EA. Principal component analysis revealed that treatments disrupted specific circuit relationships while leaving others intact, consistent with selective rather than global cognitive effects. Fiber photometry showed enhanced dentate gyrus encoding activity in irradiated males under high memory load. Combined CDDO-EA/33-GCR selectively reduced dentate gyrus progenitors in females. Males and females showed distinct, circuit-selective vulnerability patterns, demonstrating that multi-domain, both-sex assessment is necessary to capture how stressors and interventions affect integrated brain function. 
CDDO-EA proved to be a double-edged sword: protecting one cognitive domain while impairing another, a trade-off invisible to single-endpoint assessment. This framework has immediate relevance for astronaut risk assessment and extends to any context where neuroprotective interventions are evaluated against environmental stressors.
King, C. D.; Zhu, T.; Groh, J. M.
Information about eye movements is necessary for linking auditory and visual information across space. Recent work has suggested that such signals are incorporated into processing at the level of the ear itself (Gruters, Murphy et al. 2018). Here we report confirmation that the eye movement signals that reach the ear can produce perceptual consequences, via a case report of an unusual participant with tensor tympani myoclonus who hears sounds when she moves her eyes. The sounds she hears could be recorded with a microphone in the ear in which she hears them (left), and occurred for large leftward eye movements to extreme orbital positions. The sounds elicited by this participant's eye movements were reminiscent of eye movement-related eardrum oscillations (EMREOs; Gruters, Murphy et al. 2018, Brohl and Kayser 2023, King, Lovich et al. 2023, Lovich, King et al. 2023, Lovich, King et al. 2023, Abbasi, King et al. 2025, Sotero Silva, Kayser et al. 2025, King and Groh 2026, Leon, Ramos et al. 2026, Sotero Silva, Brohl et al. 2026), but were larger and longer lasting than classical EMREOs, helping to explain why they were audible to her. Overall, the observations from this patient help establish that (a) eye movement-related signals specifically reach the tensor tympani muscle and that (b) when there is an abnormality involving that muscle, such signals can lead to actual audible percepts. Given that the tensor tympani contributes to the regulation of sound transmission in the middle ear, these findings support the idea that eye movement signals reaching the ear have functional consequences for auditory perception. The findings also expand the types of medical conditions that produce gaze-evoked tinnitus, to date most commonly observed in connection with acoustic neuromas.
Palou, A.; Tagliabue, M.; Beraneck, M.; Llorens, J.
The rat vestibular system plays a critical role in anti-gravity responses such as the tail-lift reflex and the air-righting reflex. In a previous study in male rats, we obtained evidence that these two reflexes depend on the function of non-identical populations of vestibular sensory hair cells (HC). Here, we caused graded lesions in the vestibular system of female rats by exposing the animals to several different doses of an ototoxic chemical, 3,3'-iminodipropionitrile (IDPN). After exposure, we assessed the anti-gravity responses of the rats and then assessed the loss of type I HC (HCI) and type II HC (HCII) in the central and peripheral regions of the crista, utricle and saccule. As expected, we recorded a dose-dependent loss of vestibular function and loss of HCs. The relationship between hair cell loss and functional loss was examined using non-linear models fitted by orthogonal distance regression. The results indicated that both the tail-lift and air-righting reflexes mostly depend on HCI function. However, the two reflexes differed in which epithelia they depend on: while the tail-lift response is sensitive to loss of crista and/or utricle HCIs, the air-righting response depends rather on utricular and/or saccular integrity.
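The dose-response analysis described here, relating graded hair-cell loss to reflex scores, can be fitted with orthogonal distance regression, which accounts for measurement error on both axes. Below is a minimal sketch on synthetic data with a hypothetical sigmoid model; the variable names, error magnitudes, and model family are illustrative assumptions, not the authors' actual specification.

```python
import numpy as np
from scipy import odr

def sigmoid(beta, x):
    # beta = [floor, span, x50, slope]; reflex score as a function of % HCI loss
    floor, span, x50, slope = beta
    return floor + span / (1.0 + np.exp(slope * (x - x50)))

# synthetic data: reflex score declines as hair-cell loss increases
rng = np.random.default_rng(0)
hc_loss = np.linspace(0, 100, 25)                      # % HCI lost (hypothetical)
score = sigmoid([0.5, 3.5, 50.0, 0.15], hc_loss) + rng.normal(0, 0.05, hc_loss.size)

# ODR minimizes orthogonal (not just vertical) residuals, so counting error
# on the x-axis (sx) and behavioural error on the y-axis (sy) both enter the fit
data = odr.RealData(hc_loss, score, sx=3.0, sy=0.05)
fit = odr.ODR(data, odr.Model(sigmoid), beta0=[0.5, 3.0, 40.0, 0.1]).run()
x50_est = fit.beta[2]                                  # estimated half-loss point
```

Ordinary least squares would attribute all scatter to the behavioural score; ODR is the appropriate choice when hair-cell counts themselves carry sampling error.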
Westner, B. U.; Luo, Y.; Piai, V.
Both episodic memory and word retrieval have been linked to power decreases in the alpha and beta oscillatory bands, but these patterns have rarely been related to each other, partly due to a lack of suitable methodological approaches. In this explorative study, we investigate the similarities and dissimilarities in the oscillatory fingerprints of the retrieval of words and episodes by directly comparing the activity patterns across time, frequency, and space. We acquired electroencephalography (EEG) data from participants performing a language and an episodic memory task based on the same stimulus material. With a newly developed approach, we directly compared the source-reconstructed oscillatory activity using mutual information and a feature-impact analysis. While left temporal and frontal regions showed dissimilarities between the tasks, right-hemispheric parietal regions exhibited similarities. We speculate that this could indicate a homologous function of these regions, potentially sharing less-specific representations between the tasks. We further uncovered a dissociation of the alpha and beta bands regarding the similarity across tasks. While the beta band was dissimilar between word and episodic memory retrieval, the alpha band seemed to contribute to the similarity we observed in right parietal regions. Whether this points to a task-unspecific function of the alpha band or a functional role in the retrieval process of the presumed representations remains to be determined. In summary, we present an approach to study similarity across tasks using the temporal, spectral, and spatial dimensions of EEG data, and present results of exploring the shared oscillatory fingerprints between episodic memory and word retrieval.
Eccher, E.; Salva, O. R.; Chiandetti, C.; Vallortigara, G.
Numerical abilities are widespread in the animal kingdom and are not exclusive to humans. Domestic chicks (Gallus gallus) have been shown to discriminate numerosities spontaneously, but prior research has focused exclusively on the visual modality. Whether chicks can discriminate numerical information in the auditory domain remains unknown, despite evidence that they can perceive other auditory features such as tone and rhythm. In this study, we investigated spontaneous numerical discrimination in the auditory modality in naive domestic chicks. In Experiment 1, newly-hatched chicks were tested for their ability to discriminate between two auditory sequences differing in numerosity (4 vs. 12 identical sounds), with and without controlling for continuous variables such as duration and total sound amount. Experiment 2 examined chicks' filial imprinting responses to familiar or unfamiliar numerosities. Experiment 3 controlled for potential spontaneous preferences for a single longer sound versus a shorter one. Our results showed a preference for the 12-sound sequence only when duration and total sound amount were not matched. When these continuous variables were controlled, no spontaneous numerical preference emerged. Experiment 2 revealed an overall preference for the 12-sound sequence regardless of imprinting conditions, while Experiment 3 confirmed that chicks do not have an inherent preference for longer sounds. These findings suggest that chicks are sensitive to overall magnitude in the auditory domain but do not spontaneously discriminate numerical differences when other continuous variables are held constant. Future studies will explore how specific stimulus features, such as heterogeneity of sounds, influence these preferences.
Augsten, M.-L.; Lindenbeck, M. J.; Laback, B.
Cochlear implant (CI) users typically experience difficulties perceiving musical harmony due to a restricted spectro-temporal resolution at the electrode-nerve interface, resulting in limited pitch perception. We investigated how stimulus parameters affect discrimination of complex-tone triads (three-voice chords), aiming to identify conditions that maximize perceptual sensitivity. Six post-lingually deafened CI listeners completed a same/different task with harmonic complex tones, while spectral complexity, voice(s) containing a pitch change, and temporal synchrony (simultaneous vs. sequential triad presentation) were manipulated. CI listeners discriminated harmonically relevant one-semitone pitch changes within triads when spectral complexity was reduced to three or five components per voice, with significantly better performance for three-component compared to nine-component tones. Sensitivity was observed for pitch changes in the high voice or in both high and low voices, but not for changes in only the low voice. Single-voice sensitivity predicted simultaneous-triad sensitivity when controlling for spectral complexity and voice with pitch change. Contrary to expectations, sequential triad presentation did not improve discrimination. An analysis of processor pulse patterns suggests that difference-frequency cues encoded in the temporal envelope rather than place-of-excitation cues underlie perceptual triad sensitivity. These findings support reducing spectral complexity to enhance chord discrimination for CI users based on temporal cues.
Owoc, M. S.; Lee, J.; Johnson, A.; Kandler, K.; Sadagopan, S.
The inferior colliculus (IC) integrates ascending auditory and descending multimodal inputs within distinct subdivisions, the central nucleus (CNIC) and cortex (CtxIC). Despite differences in connectivity, auditory responses in these subdivisions are similar, complicating localization during in-vivo recordings. Here, we tested whether recordings can be assigned to CNIC or CtxIC using only response properties in awake and anesthetized mice. We constructed frequency response areas (FRAs) from pure tone responses and extracted tuning and firing metrics. Individual FRA features could not reliably localize recordings. In contrast, a random forest classifier combining FRA-derived features accurately localized recordings to CNIC or CtxIC across states. These findings demonstrate that while IC subdivisions differ only subtly along individual response parameters, appropriate multiparametric approaches can enable robust classification. More broadly, our results illustrate how biologically meaningful distinctions may be revealed by combining weakly informative features, an approach that can be applied across diverse brain regions and modalities.
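The classification approach described here, combining individually weak tuning features into a robust subdivision label, can be sketched with a random forest on synthetic data. The feature names (bandwidth, threshold, latency, spontaneous rate) and their distributions below are hypothetical stand-ins, not the authors' actual FRA-derived feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
# hypothetical FRA-derived features per recording site:
# [tuning bandwidth (oct), threshold (dB SPL), latency (ms), spontaneous rate (Hz)]
# the two subdivisions overlap heavily on every single feature
cnic  = rng.normal([1.0, 20.0,  8.0, 5.0], [0.4, 5.0, 2.0, 2.0], (n, 4))
ctxic = rng.normal([1.4, 25.0, 12.0, 4.0], [0.4, 5.0, 2.0, 2.0], (n, 4))
X = np.vstack([cnic, ctxic])
y = np.array([0] * n + [1] * n)           # 0 = CNIC, 1 = CtxIC

# an ensemble of trees pools the weakly informative features into one label
clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
```

Each feature alone is a poor discriminator (the class means sit within roughly one standard deviation of each other), yet the combined classifier performs well above chance, which is the core point of the multiparametric approach.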
Ocana, F. M.; Gomez, A.; Salas, C.; Rodriguez, F.
The functional organization of the teleost telencephalic pallium remains poorly understood, particularly regarding the presence of modality-specific sensory domains and their topographic arrangement. Here, we used in vivo wide-field voltage-sensitive dye imaging to map sensory-evoked neural activity across the dorsal surface of the telencephalic pallium of adult goldfish. Somatosensory, auditory, gustatory, and visual stimulation revealed distinct, modality-specific domains primarily located within the dorsomedial (Dm) and dorsolateral (Dl) pallium. Within Dm, somatosensory and auditory stimuli activated partially overlapping territories in the caudal subregion (Dm4), exhibiting clear somatotopic and tonotopic organization along the mediolateral axis. Gustatory stimulation selectively engaged Dm3, where different tastants activated spatially distinct but partially overlapping domains. A more rostral subregion (Dm2) responded only to high-intensity somatosensory stimulation, suggesting involvement in processing negatively valenced inputs. Visual stimulation activated a circumscribed area within the dorsolateral pallium (Dld2) that closely matched cytoarchitectural boundaries. Pharmacological blockade of ionotropic glutamate receptors markedly reduced sensory-evoked responses, indicating that these maps depend on glutamatergic synaptic transmission. Together, these findings show that the goldfish pallium contains distinct, spatially organized sensory representations and a refined internal functional architecture. This organization suggests that pallial topographic sensory maps may not be exclusive to mammals and birds. Based on these results, we propose that dorsomedial and dorsolateral pallial regions may be functionally comparable to components of the mammalian mesocortical network, rather than to the pallial amygdala or the neocortex.
This framework provides a new perspective on pallial organization in teleosts and contributes to understanding the evolutionary origins of the vertebrate pallium.
Highlights:
- Voltage-sensitive dye imaging was used to map sensory responses in the goldfish pallium.
- Distinct sensory areas for somatosensory, auditory, gustatory, and visual modalities were identified.
- Some sensory regions in Dm show topographically organized maps.
- Functional segregation suggests a complex, non-diffuse pallial organization.
- Findings support a novel hypothesis linking Dm and Dld to mammalian mesocortical regions.
Neely, S. T.; Harris, S. E.; Hajicek, J. J.; Petersen, E. A.; Shen, Y.
In a loudness-matching paradigm, a reduction in the loudness of sounds with bandwidths less than one-half octave compared to a tone of equal sound pressure level has been observed previously for five-tone complexes at 60 dB SPL centered at 1 kHz. Here, this loudness-reduction phenomenon is explored using band-limited noise across wide ranges of frequency and level. Additionally, these measurements are simulated by a model of loudness judgement based on neural ensemble averaging (NEA), which serves as a proxy for central auditory signal processing. Multi-frequency equal-loudness contours (ELC) were measured for each of the adult participants (N=100) with pure-tone average (PTA) thresholds that ranged from normal to moderate hearing loss using a categorical-loudness-scaling (CLS) paradigm. Presentation level and center frequency of the test stimuli were determined on each trial according to a Bayesian adaptive algorithm, which enabled multi-frequency ELC estimation within about five minutes of testing. Three separate test conditions differed by stimulus type: (1) pure-tone, (2) quarter-octave noise and (3) octave noise. For comparison, loudness judgements for all three stimulus types were also simulated by the NEA model, which comprised a nonlinear, active, time-domain cochlear model with an appended stage of neural spike generation. Mid-bandwidth loudness reduction was observed to be greatest at moderate stimulus levels and frequencies near 1 kHz. This feature was approximated by the NEA model, which suggests involvement of an early stage of the central auditory system in the formation of loudness judgements.
Siebert, R.; Taubert, N.; Giese, M. A.; Thier, P.
Facial displays are an important channel of communication for primates, yet it remains unclear based on which criteria monkeys evaluate facial expressions. We trained two rhesus macaques to categorize videos of four facial expression types (neutral, lip-smacking, silent bared-teeth (SBT) and open-mouth threat displays) and then tested generalization to novel individuals and avatars while measuring pupillary responses. Monkeys were able to sort facial expressions into categories and generalized to new, untrained videos, albeit imperfectly. Confusions were not due to visual similarity between expressions, as demonstrated by a novel automated method integrating deep learning-based motion tracking with the Macaque Facial Action Coding System, enabling objective quantification of facial action units. Instead, open-mouth threats were readily categorized as such and distinguished from lip-smacking while eliciting the strongest arousal, consistent with an innate predisposition for threat detection. By contrast, categorization of SBT displays varied substantially across stimulus identities, influenced by body weight, gaze direction, and coordinated movements of eyebrows and ears. Morphed avatar expressions were categorized according to expression component intensity, demonstrating graded perception. Avatar manipulations revealed that categorization was robust against lack of coherent motion, lack of realism beyond a basic level of texture, and mostly transcended species form boundaries to a human face. However, human facial expressions elicited random categorizations and no differential arousal, highlighting the necessity of species-specific facial motion. These findings demonstrate that rhesus macaques perceive facial expressions as functionally meaningful, context-dependent social signals shaped by both the expression itself and signaler characteristics, rather than fixed morphological categories.
Stowell, D.; Nolasco, I.; McEwen, B.; Vidana Vila, E.; Jean-Labadye, L.; Benhamadi, Y.; Lostanlen, V.; Dubus, G.; Hoffman, B.; Linhart, P.; Morandi, I.; Cazau, D.; White, E.; White, P.; Miller, B.; Nguyen Hong Duc, P.; Schall, E.; Parcerisas, C.; Gros-Martial, A.; Moummad, I.
Computational bioacoustics has seen significant advances in recent decades. However, the rate of insights from automated analysis of bioacoustic audio lags behind our rate of collecting the data - due to key capacity constraints in data annotation and bioacoustic algorithm development. Gaps in analysis methodology persist: not because they are intractable, but because of resource limitations in the bioacoustics community. To bridge these gaps, we advocate the open science method of data challenges, structured as public contests. We conducted a bioacoustics data challenge named BioDCASE, within the format of an existing event (DCASE). In this work we report on the procedures needed to select and then conduct useful bioacoustics data challenges. We consider aspects of task design such as dataset curation, annotation, and evaluation metrics. We report the three tasks included in BioDCASE 2025 and the resulting progress made. Based on this we make recommendations for open community initiatives in computational bioacoustics.
Dong, C.; Wang, Z.; Zuo, X.; Wang, S.
Interpersonal communication relies on integrating facial and vocal signals to extract multidimensional communicative information. How the absence of audition reshapes the communicative system remains unclear. We compared the performance of deaf (N=136) and hearing (N=135) adults across multiple domains: facial identity, emotional expression, speech, and global motion, through a series of unisensory and audiovisual psychophysical tasks. The results showed that, in hearing individuals, reliance on facial versus vocal signals differed across domains. In deaf individuals, auditory deprivation did not produce uniform enhancement or impairment of visual processing. Instead, they exhibited reduced sensitivity to dynamic emotional expressions and global motion, preserved sensitivity to facial identity (both static and dynamic) and static expressions, and enhanced categorization of facial speech. Notably, sensitivity to dynamic facial expressions and global motion was correlated, and both were explained by variations in fluid intelligence. Our results provide a systematic characterization of visual function across domains in deaf individuals, suggesting that the consequences of hearing loss are shaped both by the functional roles of audition within each domain and by broader cognitive adaptations. These findings advance understanding of cross-modal plasticity and inform the development of targeted, ecologically valid accessibility and sensory-substitution strategies.
DeWitt-Batt, S. L.; DeMann, K. E.; Houck, C. J.; Larson, C. L.; Horsburgh, L. A.; Thomas, E. A.; Sanchez, L.; Calvo-Ochoa, E.
Hypoxic-ischemic injury is a major cause of olfactory dysfunction, yet the cellular and morphological mechanisms underlying this sensory loss remain poorly understood. Here, we investigated the structural, cellular, and functional effects of acute hypoxic exposure on the olfactory system of adult zebrafish (Danio rerio) of both sexes, a model organism with remarkable neuroregenerative capacity. Fish were subjected to 15 minutes of acute severe hypoxia (0.8 mg/L dissolved oxygen) and assessed at 1 and 5 days post-hypoxia (dph). We evaluated olfactory function by means of cadaverine-evoked aversive behavioral assays. Structural and morphological integrity and inflammation of the olfactory epithelium (OE) and olfactory bulb (OB) were characterized using immunohistochemistry, histological staining, and a 2,3,5-triphenyltetrazolium chloride (TTC) colorimetric assay. Acute hypoxic exposure impaired olfactory-mediated behaviors without affecting locomotion or exploratory behavior. In the peripheral OE, hypoxia caused neurodegeneration, disruption of the nasal mucus layer, and robust leukocytic infiltration. We observed reduced mitochondrial dehydrogenase activity in the OB along with reactive astrogliosis. Olfactory function recovered by 5 days, coinciding with full restoration of OE morphology, and supported by a strong proliferative response. These findings reveal a coordinated degenerative and regenerative response to hypoxia across the olfactory axis, with implications for understanding hypoxia-induced sensory loss and neural repair. Significance: This work addresses an important gap in knowledge regarding the mechanisms linking hypoxic insult and olfactory dysfunction. By using adult zebrafish, an extraordinarily regenerative vertebrate, it also provides insight into neuronal repair and regenerative processes supporting olfactory recovery.
To our knowledge, no previous study has provided a comprehensive characterization of the effects of hypoxia on the olfactory system across molecular, histological, and functional levels. These findings advance our understanding of hypoxia-induced sensory neurodegeneration and regeneration, and highlight the zebrafish olfactory system as a powerful model for investigating neural repair mechanisms relevant to hypoxic-ischemic brain injury.
Liu, P.; Bo, K.; Chen, Y.; Keil, A.; Ding, M.; Fang, R.
Emotion reshapes perception by modulating sensory processing through top-down feedback, a process referred to as emotional perception. The computational mechanisms by which distinct affective signals influence visual representations however remain poorly understood. Here, we use a deep neural network to simulate this process and test mechanistic hypotheses about how top-down feedback guides emotional perception. Most existing models treat the perception of emotional content as a static, feedforward task, overlooking the dynamic interplay between internal states, external goals, and sensory input that characterizes affective perception in the brain. We introduce EmoFB, a biologically inspired model that integrates an affective system with a visual processing hierarchy through two functionally distinct feedback signals: intrinsic feedback, arising from the model's own affective appraisal of perceptual input, and external steering, conveying contextual priors such as task expectations or target categories. We evaluated EmoFB on three tasks varying in perceptual ambiguity (Single Image, Side-by-Side, and Overlay). External steering exerted the strongest influence, not only improving recognition under challenging conditions but also restructuring internal representations by sharpening category-specific clustering in feature space. Crucially, top-down feedback increased brain-model representational similarity, strengthening alignment with human fMRI responses across early visual cortex, ventral visual areas, and the amygdala. EmoFB provides a computational framework for testing neurocognitive theories of emotion appraisal and top-down feedback modulation. It bridges affective neuroscience and artificial intelligence, offering mechanistic insight into how emotional signals shape perception in both brains and machines.
Vivion, M.; Mathy, F.; Guida, A.; Mondot, L.; Ramanoel, S.
Spatialization in working memory refers to the spatial coding of non-spatial information along a mental horizontal line when encoding verbal material. This phenomenon is thought to support working memory by facilitating order encoding. Although it has been observed for both visually and auditorily presented stimuli, no direct comparison has yet examined whether these modalities rely on similar neural mechanisms. In this study, we investigated whether spatialization in visual and auditory modalities involves shared or distinct patterns of activity within the working-memory network. Forty-nine participants performed both a visual and an auditory working memory SPoARC task on the same verbal material, allowing us to study the cortical patterns associated with distinct serial positions at both encoding and recognition across sensory modalities. Whole-brain analyses revealed similar frontoparietal networks across conditions. In addition, a representational similarity analysis (RSA) was conducted to assess the similarity of neural patterns between early and late serial positions in a sequence and across sensory modalities. This multivoxel pattern analysis revealed modality-dependent patterns distinguishing early and late positions in the inferior frontal gyrus. Additional modality-specific effects were observed in the anterior intraparietal sulcus in the visual modality and in the posterior hippocampus in the auditory modality. Drawing on the framework proposed by Bottini & Doeller (2020), we propose that order decoding in the IPS might reflect a low-dimensional spatial coding of order (e.g., along a horizontal axis), whereas order decoding in the hippocampus might reflect higher-dimensional spatial representations or temporal representations.
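The RSA logic used here, comparing the geometry of condition-wise activity patterns across two modalities, can be sketched in a few lines. The pattern sizes, noise level, and condition count below are illustrative assumptions on synthetic data, not the study's actual fMRI dimensions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_cond, n_vox = 8, 50          # hypothetical: 8 serial-position conditions, 50 voxels

# simulate a shared representational structure plus modality-specific noise
shared   = rng.normal(size=(n_cond, n_vox))
visual   = shared + 0.3 * rng.normal(size=(n_cond, n_vox))
auditory = shared + 0.3 * rng.normal(size=(n_cond, n_vox))

# representational dissimilarity matrix (RDM): correlation distance
# between every pair of condition patterns, within each modality
rdm_vis = pdist(visual, metric="correlation")
rdm_aud = pdist(auditory, metric="correlation")

# second-order (Spearman) correlation of the two RDMs quantifies how
# similar the representational geometries are across modalities
rho, _ = spearmanr(rdm_vis, rdm_aud)
```

Because the comparison happens at the level of RDMs rather than raw voxel patterns, RSA can relate regions or modalities whose voxel spaces are not directly alignable.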
Borrajo, M.; Callejo, A.; Castellanos, E.; Amilibia, E.; Llorens, J.
Vestibular schwannomas (VS) cause vestibular function loss by mechanisms still poorly understood. We evaluated the vestibulo-ocular reflex by the video-assisted Head Impulse Test (vHIT) in patients with planned tumour resection by a trans-labyrinthine approach. The vestibular sensory epithelia were collected and processed by immunofluorescent labelling for confocal microscopy analysis of sensory hair cell subtypes (type I, HCI, and type II, HCII), calyx endings of the pure-calyx afferents, and the calyceal junction normally found between HCI and the calyx (n=23). Comparing Normofunction and Hypofunction patients, we concluded that worse vestibular function associates with decreased HCI and HCII counts in the sensory epithelia and with an increased proportion of damaged calyces. The numbers of HCI and of calyx endings of the pure-calyx afferents also decreased with increasing age. Partial least squares regression (PLSR) models indicated that VS and age had independent, additive effects on vestibular function. Correlation analyses indicated that lower vHIT gains associate with lower numbers of HCI and increased percentages of damaged calyces. These data support the hypothesis that the deleterious effect of VS on vestibular function is mediated, at least in part, by its damaging impact on the vestibular sensory epithelium. They also provide further evidence for the dependency of the vestibulo-ocular reflex on HCI function and for the calyceal junction pathology as a common response of the sensory epithelium to HC stress.
Hamilton, J. J.; Berriman, L.; Harrison-Best, S.; Dalrymple-Alford, J. C.; Mitchell, A. S.
Cognitive flexibility, the ability to switch behavioural responses to changing task demands, is classically attributed to the prefrontal cortex. Yet thalamocortical circuits involving the mediodorsal thalamus (MD) and thalamic nucleus reuniens (Re) are dysfunctional across a range of neurological conditions with cognitive flexibility deficits. Interventions involving thalamocortical interactions may offer therapeutic benefits. Here we examined the effects of MD or Re bilateral glutamatergic neurotoxic damage in rats on cognitive flexibility using the attentional set-shifting task. Rats must attend to a sensory dimension that reliably predicts reward (intradimensional shift, ID) followed by a shift in attention to a previously irrelevant sensory dimension when contingencies change (extradimensional shift, ED). We found MD rats required more trials to criterion in the ED, while Re rats showed significant impairments on the first of three ID subtasks (ID1) only. Both MD and Re rats required more trials to criterion to complete each subtask than Sham controls. The noradrenergic alpha-2 antagonist atipamezole (1 mg/kg, intraperitoneal), given 30 minutes prior to the task, reduced trials to criterion across all rats, improving cognitive flexibility even after thalamic damage. These findings demonstrate the contributions of MD and Re to cognitive flexibility and support noradrenergic regulation of thalamocortical circuits as potential therapeutic targets for cognitive flexibility dysfunction.
Yang, J.; Carter, O.; Shivdasani, M. N.; Grayden, D. B.; Hester, R.; Barutchu, A.
Selective attention enables the prioritization of task-relevant information while managing distractors, and steady-state visual evoked potentials (SSVEPs) are widely used to track this process by tagging different visual objects at distinct flicker frequencies. However, whether the choice of tagging frequency itself influences other neural and cognitive measures remains unclear. Here, 27 participants performed detection and 1-back working memory tasks while a central target and peripheral distractors flickered at either 8.6 Hz or 12 Hz. The working memory task produced slower responses, more errors, and greater perceived difficulty than detection. Tagging frequency strongly shaped neural responses, with 8.6 Hz eliciting higher SSVEP signal-to-noise ratios than 12 Hz regardless of stimulus location. Nevertheless, stronger SSVEP responses for centrally attended stimuli were associated with fewer working memory errors and larger early visual ERP responses, while SSVEPs for attended and distractor stimuli were negatively correlated. In addition, the working memory task produced a larger P1-N1 peak-to-peak difference, and tagging frequency altered the timing and amplitude of early ERP effects. Together, these findings show that tagging frequency is not a neutral methodological parameter, but one that shapes both neural indices of attention and their relationship to cognitive performance.
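The SSVEP signal-to-noise ratios compared here (8.6 Hz vs 12 Hz tagging) are conventionally computed as power at the tag frequency relative to mean power in neighbouring spectral bins. A minimal sketch on a simulated recording; the sampling rate, noise level, and neighbourhood widths are illustrative assumptions, not the study's parameters.

```python
import numpy as np

fs, dur = 500.0, 10.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)

# hypothetical EEG trace: an 8.6 Hz tagged response buried in broadband noise
eeg = 1.0 * np.sin(2 * np.pi * 8.6 * t) + rng.normal(0, 2.0, t.size)

spec = np.abs(np.fft.rfft(eeg)) ** 2        # power spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)     # bin spacing = 1/dur = 0.1 Hz

def ssvep_snr(f_tag, half_width=1.0, exclude=0.2):
    """Power at the tag frequency over mean power at neighbouring bins,
    excluding bins immediately adjacent to the tag."""
    sig = spec[np.argmin(np.abs(freqs - f_tag))]
    neigh = (np.abs(freqs - f_tag) <= half_width) & (np.abs(freqs - f_tag) > exclude)
    return sig / spec[neigh].mean()

snr = ssvep_snr(8.6)
```

Because a 10-second epoch gives 0.1 Hz bins, 8.6 Hz falls exactly on a bin; choosing tag frequencies that align with the spectral grid is one practical reason frequency choice is not a neutral parameter.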
Lebenstein-Gumovski, M.; Romanenko, Y.; Kovalev, D.; Rasueva, T.; Canavero, S.; Zhirov, A.; Talypov, A.; Grin', A.
Introduction: The exploration of alternative strategies for neural tissue regeneration and repair is giving rise to a novel paradigm in neurosurgery: fusogenic therapy. This approach promises rapid restoration of peripheral nerve and spinal cord function by circumventing Wallerian degeneration and eliminating the delay associated with axonal regrowth. Its potential stems from the capacity of fusogens to induce axonal fusion and achieve immediate membrane sealing, complemented by their pronounced neuroprotective properties. However, experimental data on fusogens and their effects are inconsistent, often contentious, and derived using heterogeneous methodologies. Methods: We present the first comprehensive systematic review covering nearly four decades of research on fusogens for axonal membrane repair and 26 years of their experimental and clinical application in mammalian and human models for peripheral and central nervous system restoration. The review includes a meta-analysis of fusogen efficacy following traumatic spinal cord and peripheral nerve injuries. Results: Conducted in accordance with the PRISMA 2020 protocol and PICO criteria, our analysis incorporates 86 sources, 20 of which were included in the meta-analysis. Discussion: In summary, we have systematized the prevailing approaches and methods for fusogen application, delineated key contentious issues, and identified promising directions for the development of axonal fusion technology.
Figarola, V.; Liang, W.; Luthra, S.; Parker, E.; Winn, M.; Brown, C.; Shinn-Cunningham, B. G.
Listeners face many challenges when trying to maintain attention to a target source in everyday settings; for instance, reverberation distorts acoustic cues and interruptions capture attention. However, little is known about how these challenges affect the ability to maintain selective attention. Here, we measured syllable recall accuracy and pupil dilation during a spatial selective attention task that was sometimes disrupted. Participants heard two competing, temporally interleaved syllable streams presented in pseudo-anechoic or reverberant environments. On randomly selected trials, a sudden interruption occurred mid-sequence. Compared to anechoic trials, reverberant performance was worse overall, and the interrupter disrupted performance. In uninterrupted trials, reverberation reduced peak pupil dilation both when it was consistent across all stimuli in a block and when it was randomized trial to trial, suggesting temporal smearing reduced clarity of the scene and the salience of events in the ongoing streams. Pupil dilations in response to interruptions indicated that perceptual salience was strong across both reverberant and anechoic conditions. Specifically, baseline pupil size before trials did not vary across room conditions, and mixing or blocking of trials (altering stimulus expectations) had no impact on pupillary responses. Together, these findings highlight that stimulus salience drives cognitive load more strongly than does task performance.